Portable Two-Photon Microscopy
Why This Technology Matters
Two-photon microscopy enables the visualization of living neural circuits with cellular and subcellular resolution at depths up to 1 mm below the tissue surface (Denk et al., 1990; So et al., 2000). Two-photon excitation occurs only at the focal point, where photon density is highest, providing inherent optical sectioning without requiring pinholes (Denk et al., 1990; Luu & Knutsen, 2024). This localized excitation minimizes photobleaching and photodamage while allowing efficient collection of scattered photons (Mostany et al., 2015). Researchers have used two-photon microscopy to map neural connectivity, track synaptic plasticity during learning, visualize dendritic spine dynamics, and record calcium transients from hundreds of neurons simultaneously (Mostany et al., 2015; Luu & Knutsen, 2024).
Traditional two-photon microscopes, however, require subjects to be head-fixed beneath large, stationary imaging systems (Mostany et al., 2015). This immobilization restricts the behaviors that can be studied; many research questions require observing brain activity during naturalistic behaviors that are difficult or impossible to elicit under head fixation (Luu & Knutsen, 2024).
A portable two-photon microscope would be transformative for several reasons. First, it would enable imaging during freely moving behaviors, such as social competition, that cannot occur under the restraints of traditional two-photon microscopy (Madruga et al., 2025). Second, it would allow continuous imaging over days to weeks while animals engage in natural behaviors, revealing how neural circuits reorganize during learning and memory consolidation (Madruga et al., 2025). Third, portable systems could eventually translate to clinical applications, including intraoperative imaging during neurosurgery, early detection of neurological diseases, and monitoring treatment responses in human patients (Horton et al., 2013; Jun et al., 2024). In short, the technology is currently confined to the laboratory, and portability would fundamentally change how researchers can gather information from both animal and human subjects.
What Makes It Lab-Bound?
Considerable size, weight, and power constraints keep two-photon microscopes in the lab. The femtosecond pulsed laser is the largest and heaviest component. Standard Ti:sapphire (Ti:Sa) lasers measure 1-2 meters in length, weigh 50-150 kg, and occupy substantial optical table space (Helmchen & Denk, 2005; Svoboda & Yasuda, 2006). A complete system, including the pump laser (required to excite the Ti:Sa crystal), cooling unit, and beam conditioning optics, occupies 2-4 square meters of lab bench. The Ti:Sa laser head itself weighs 20-40 kg, with the pump laser adding another 30-60 kg (Svoboda & Yasuda, 2006; Entenberg et al., 2011). Additional beam delivery optics (galvanometer scanners, scan lenses, dichroic mirrors) add 5-10 kg, and the microscope objective, detection system, and mounting hardware contribute another 10-20 kg. Total system weight reaches roughly 100-200+ kg, firmly anchoring the system to the lab (Helmchen & Denk, 2005; Entenberg et al., 2011).
Power requirements present a further barrier to portability. Ti:sapphire lasers require 5-20 watts of green pump light (532 nm) from argon-ion or solid-state lasers, which themselves consume 10-30 kW of electrical power. The complete system draws 15-40 kW including chillers for temperature stabilization, galvanometer drivers, computers, and detectors (Svoboda & Yasuda, 2006; Helmchen & Denk, 2005; Entenberg et al., 2011). This necessitates dedicated electrical circuits and climate control. Battery operation is completely infeasible: even a high-capacity lithium battery providing 1 kWh would power the system for only 2-4 minutes, far too little time to collect meaningful data from a subject (Helmchen & Denk, 2005).
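The battery figure above follows from simple arithmetic. A minimal check in JavaScript, using the 1 kWh pack and 15-40 kW system draw quoted above:

```javascript
// Runtime of a battery pack driving a benchtop two-photon system.
// Figures from the text: a 1 kWh (1000 Wh) pack and a 15-40 kW system draw.
function runtimeMinutes(packWh, drawW) {
  return (packWh / drawW) * 60; // hours of runtime converted to minutes
}

console.log(runtimeMinutes(1000, 15000)); // ~4 minutes at the low end of the draw
console.log(runtimeMinutes(1000, 40000)); // ~1.5 minutes at the high end
```

Even at the most favorable end of the quoted power range, the pack is exhausted in about four minutes, confirming that battery operation of a Ti:Sa system is a non-starter.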
Ti:sapphire lasers require closed-loop water cooling or nitrogen gas flow to maintain crystal temperature stability within ±0.1 °C (Svoboda & Yasuda, 2006; Entenberg et al., 2011). Many systems need magnetically shielded environments to prevent beam pointing instability, and the optical table must be vibration-isolated (pneumatic or active isolation) to prevent focal spot jitter during imaging (Helmchen & Denk, 2005; Svoboda & Yasuda, 2006). These requirements confine systems to temperature-controlled laboratory spaces with specialized infrastructure.
Setting up a two-photon microscope for an imaging session requires 30-90 minutes. Users must: (1) align the laser beam path through multiple mirrors and lenses, (2) optimize pulse compression to compensate for dispersion, (3) calibrate the galvanometer scanners, (4) position the objective precisely above the sample, and (5) prepare the biological specimen with proper immersion media. All optical components must remain precisely aligned; even slight misalignment degrades image quality (Galinanes et al., 2018). The system's complexity also demands significant operator training.
Cost is a final barrier. A complete two-photon microscope costs $400,000 to $1,000,000+. The Ti:sapphire laser alone accounts for $150,000-$250,000 of this cost; even a basic tunable Ti:Sa kit with integrated pump costs $60,000-$77,000. The microscope frame, scanners, and optics add $100,000-$140,000, and high-sensitivity photomultiplier tubes cost another $20,000-$50,000 (Excedr, 2025; Del Mar Photonics, 2007).
The following ranks these barriers from most to least significant:
- Laser size/weight/power
- Cost (limits accessibility to well-funded institutions)
- Environmental requirements
- Setup complexity
- Regulatory/safety
Your Portable Design
Core Innovation: Integrated Fiber Laser + MEMS Scanner + On-Board Detection
The key insight enabling portable two-photon microscopy is replacing three bulky subsystems with miniaturized alternatives: the Ti:sapphire laser with a compact fiber laser, the galvanometer scanners with a MEMS scanning mirror, and external PMT detection with an on-board silicon photomultiplier (SiPM) (Thorlabs, 2024; Madruga et al., 2025; Schottdorf et al., 2025).
Modern ytterbium-doped fiber lasers produce femtosecond pulses at 920 nm or 1030-1064 nm, wavelengths ideal for exciting GFP, GCaMP, and other common fluorophores. These all-fiber systems measure 40 cm x 23 cm x 12 cm and weigh 18 kg for the laser head plus 12 kg for the power supply, a total of 30 kg versus 100-200 kg for Ti:Sa systems (Thorlabs, 2024). Fiber lasers deliver pulses through standard polarization-maintaining optical fiber, eliminating free-space beam alignment (Schottdorf et al., 2025; Thorlabs, 2024). A hollow-core photonic crystal fiber (HC-PCF) carries femtosecond pulses from the laser to the head-mounted microscope without pulse broadening or fiber damage. HC-PCF guides light in an air core, minimizing the nonlinear effects and group velocity dispersion that plague solid-core fibers. A 1-meter HC-PCF adds negligible weight (typically <50 grams) while maintaining pulse quality (Tai et al., 2004; Choi et al., 2014).
Traditional galvanometer scanners weigh 200-500 g and measure 5-10 cm per axis. MEMS (microelectromechanical systems) scanners weigh <1 gram and measure 3-5 mm on a side. A two-axis MEMS mirror fabricated from single-crystal silicon can achieve a ±10° optical scan angle at 4-8 kHz resonant frequency, enabling 15-40 fps imaging at 256x256 pixel resolution (Thorlabs, 2024; Schottdorf et al., 2025). The MEMS scanner uses electrostatic comb-drive actuators requiring only 10-30 V drive voltage and consuming <100 mW, which makes battery operation possible. The compact MEMS design also eliminates bulky mirror housings and servo electronics, reducing microscope head volume by 10-20x.
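The frame-rate figures follow directly from the scanner's resonant frequency. A minimal sketch, assuming one image line per mirror oscillation for unidirectional scanning (bidirectional acquisition, if used, would double the line rate):

```javascript
// Frame rate achievable with a resonant line scanner.
// Assumption: one image line per mirror period (unidirectional scanning).
function framesPerSecond(resonantHz, linesPerFrame, bidirectional = false) {
  const linesPerSecond = resonantHz * (bidirectional ? 2 : 1);
  return linesPerSecond / linesPerFrame;
}

console.log(framesPerSecond(4000, 256));       // 15.625 fps
console.log(framesPerSecond(8000, 256));       // 31.25 fps
console.log(framesPerSecond(8000, 256, true)); // 62.5 fps with bidirectional scanning
```

At 256 lines per frame, the 4-8 kHz resonant range yields roughly 15-31 fps unidirectionally, consistent with the 15-40 fps specification above.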
Traditional two-photon microscopes route fluorescence emission through collection optics to an external photomultiplier tube (PMT) via fiber bundles or descanned detection paths, adding weight, optical losses, and complexity. Silicon photomultipliers (SiPMs) provide an alternative without these drawbacks (Thorlabs, 2024; Schottdorf et al., 2025). SiPMs are solid-state detectors consisting of avalanche photodiode arrays capable of single-photon detection. They offer 30-40% photon detection efficiency, comparable to PMTs, but in a 3 mm x 3 mm package weighing <5 grams. SiPMs operate at low voltage (25-30 V), tolerate magnetic fields, and cannot be damaged by ambient light, all major advantages over PMTs. Additionally, SiPMs mount directly on the microscope head, eliminating collection fiber bundles and enabling non-descanned detection for maximum photon collection efficiency.
The miniature microscope attaches directly to the animal's skull above a chronic cranial window. Dimensions are 2.0 cm x 1.9 cm x 1.1 cm; weight is 2.6-4.0 grams. An ultra-lightweight coaxial cable bundle (diameter: 2-3 mm, weight: 5-10 g/meter) connects the head unit to the control electronics and fiber laser (Thorlabs, 2024; Madruga et al., 2025). A briefcase-sized portable unit (40 cm x 30 cm x 20 cm, weight: 35-40 kg including batteries) houses the fiber laser, control electronics, and a rechargeable lithium battery pack (Schottdorf et al., 2025). The microscope objective mounts on a flexible arm, allowing positioning near the imaging site. This configuration enables 2-4 hours of continuous operation on battery power, sufficient for typical human and animal imaging sessions.
Specifications of the portable system:
- Field of view: 0.17-0.45 mm diameter (Thorlabs, 2024)
- Lateral resolution: 0.6-0.9 μm (Madruga et al., 2025)
- Axial resolution: 3-4 μm
- Imaging speed: 15-40 Hz at 256x256 pixels (Thorlabs, 2024; Schottdorf et al., 2025)
- Imaging depth: 250-500 μm (up to 900 μm with optimized optics) (Madruga et al., 2025; Schottdorf et al., 2025)
- Signal-to-noise ratio: ~75% of benchtop systems (Thorlabs, 2024)
- Weight: 7-14 g total (head + cable)
- Setup time: <5 minutes (Thorlabs, 2024)
Compared with benchtop systems, the portable design achieves 70-80% of benchtop SNR, lateral resolution about 20% lower, a field of view of 0.17-0.45 mm versus 0.5-1.0+ mm, imaging depth of 250-500 μm versus 500-900 μm, a 50-75x size reduction (2.6-4 g vs. 100-200+ kg), and a 5-20x cost reduction ($50-80K vs. $400K-$1M+). Resolution of 0.6-0.9 μm resolves individual neuron somata and spines; 15-40 fps temporal resolution captures calcium transients; 250-500 μm depth accesses superficial cortical and hippocampal regions; and the field of view captures 100-1000+ neurons depending on density (Madruga et al., 2025; Schottdorf et al., 2025). These specifications make the system suitable for real-world use.
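The 2-4 hour battery runtime is consistent with plausible component-level numbers. In the sketch below, the pack capacity and power draws are illustrative assumptions for a system of this class, not measured specifications of the design:

```javascript
// Battery runtime of the briefcase unit (illustrative assumptions only).
const packWh = 600;       // assumed capacity of the lithium pack in the 35-40 kg unit
const fiberLaserW = 150;  // assumed wall-plug draw of a compact fiber laser
const electronicsW = 50;  // assumed MEMS driver, SiPM, and control electronics draw

const runtimeHours = packWh / (fiberLaserW + electronicsW);
console.log(runtimeHours + " hours"); // 3 hours, within the stated 2-4 hour range
```

The point of the sketch is the contrast with the benchtop case: a draw of a few hundred watts, rather than tens of kilowatts, is what makes battery operation feasible at all.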
Engineering Challenges & Solutions
Challenge 1: Pulse Dispersion Through Fiber Delivery. Femtosecond pulses broaden when propagating through optical fibers; a 100 fs pulse can stretch to 300–500 fs after 1 meter, reducing peak power 3–5× and degrading two-photon efficiency (Choi et al., 2014).
Solution: Use hollow-core photonic crystal fiber with pre-compensation. HC-PCF guides light in air, minimizing dispersion (Tai et al., 2004; Choi et al., 2014). Pre-chirped input introduces negative GVD that recompresses pulses to <100 fs at fiber output with >80% efficiency (Choi et al., 2014).
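The peak-power penalty in Challenge 1 follows from conservation of pulse energy: for a fixed energy per pulse, peak power scales inversely with pulse duration, so stretching a pulse by a factor k costs a factor k in peak power. A quick sketch:

```javascript
// Peak power at fixed pulse energy: P_peak ≈ E_pulse / tau.
// Stretching 100 fs to 300-500 fs therefore costs 3-5x in peak power.
function peakPowerFactor(tauInFs, tauOutFs) {
  return tauOutFs / tauInFs; // factor by which peak power is reduced
}

console.log(peakPowerFactor(100, 300)); // 3x reduction
console.log(peakPowerFactor(100, 500)); // 5x reduction
```

Because two-photon excitation scales with the square of instantaneous intensity, this loss hits signal disproportionately, which is why recompressing to <100 fs at the fiber output matters.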
Challenge 2: Limited Field of View and Optical Aberrations. GRIN lenses in miniature endoscopes exhibit severe off-axis aberrations, limiting the effective field of view to ~60% of the physical aperture.
Solution: Aspherical correction elements, 3D-printed by two-photon lithography, are designed to cancel the measured GRIN aberrations (eLife Sciences, 2024). These elements provide a 2-3x larger effective field of view and 2-5x higher signal-to-noise ratio in peripheral regions, and they enable deep imaging with GRIN lenses up to 8.8 mm long (eLife Sciences, 2024).
Challenge 3: Thermal Load and Power Delivery. The miniature microscope dissipates 0.7-1.3 W (laser excitation plus electronics) in a 2-4 g device, risking violation of the <1 °C limit on brain tissue warming.
Solution: A carbon-fiber-reinforced PEEK housing with high thermal conductivity enables passive convective cooling (PubMed Central, 2009). The 80 MHz repetition rate with <0.001% duty cycle keeps average tissue heating below 0.5 °C (Madruga et al., 2025). An acousto-optic modulator gates the excitation pulses, reducing average power by 30-50% (Laser Focus World, 2020). A real-time thermistor monitors temperature, with automatic power reduction if warming exceeds 0.5 °C (Madruga et al., 2025).
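The quoted duty cycle can be checked directly: it is just pulse width times repetition rate. A sketch, assuming ~100 fs pulses as in Challenge 1:

```javascript
// Duty cycle of a femtosecond pulse train = pulse width x repetition rate.
function dutyCyclePercent(pulseWidthFs, repRateHz) {
  return pulseWidthFs * 1e-15 * repRateHz * 100;
}

console.log(dutyCyclePercent(100, 80e6)); // ~0.0008%, below the stated 0.001%
```

The laser is "on" for less than a hundred-thousandth of the time, which is why average power, and hence tissue heating, stays so low despite high peak intensities at the focus.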
Several recent technological advances make this design possible:
- Compact fiber lasers (2015–present): TOPTICA, Spark Lasers, Fluence systems reduce size 5–10× to 30 kg; prices dropped from $150K+ to $50–80K (Schottdorf et al., 2025)
- High-efficiency SiPMs (2010–present): 30–40% detection efficiency with minimal dark noise (Thorlabs, 2025)
- Two-photon lithography (2020–present): Fabricates micro-optics in 1–2 hours versus weeks (Nature, 2022)
- Advanced calcium indicators: GCaMP6 provides 5–10× brighter fluorescence than earlier sensors (Madruga et al., 2025)
- AI/ML image processing: Corrects motion artifacts and tracks cell identities across sessions (Fortune Business Insights, 2023)
- 3D-printed MEMS (2022–present): Custom scanners fabricated in <2 hours at <$100/device versus $10,000+ traditionally (Nature, 2022)
Impact & Feasibility
New Applications Enabled
Researchers: Chronic imaging over weeks to track circuit reorganization during learning (Madruga et al., 2025). Record 1000+ neurons across multiple brain regions during unrestricted movement (Labmaker, 2024).
Clinicians: Intraoperative imaging for tumor margin visualization during neurosurgery (Horton et al., 2013). Early Alzheimer's detection via amyloid imaging (Frontiers in Neuroscience, 2024). Monitor treatment responses in neurological diseases (Imperial College London, 2017).
Timeline to Reality
- 2 years (2027): Widespread adoption becomes feasible as fiber laser costs drop to $50-80K, making portable systems affordable for mid-tier labs (Schottdorf et al., 2025).
- 5 years (2030): FDA approval for clinical research becomes feasible once animal safety studies are complete (Frontiers in Neuroscience, 2024). Battery-powered portable units become available for neurosurgery (Schottdorf et al., 2025).
- 10 years (2035): Routine clinical use becomes feasible; costs drop to $40-60K, enabling adoption in less well-funded settings and further broadening access (Excedr, 2025).
Cost Projection
- Total cost: benchtop $400K-$1M+; portable $120K-$180K
- Cost ratio: the portable system is 3-8x less expensive
Market Size
- Academic labs: 5,000+ currently use two-photon microscopy; potentially 15,000-20,000 if the price falls below $100K (Fortune Business Insights, 2023)
- Pharma: 500+ CNS drug developers who could adopt the technology for preclinical research (Fortune Business Insights, 2023)
- Clinical centers: 2,000+ medical centers (Frontiers in Neuroscience, 2024)
- Biotech: 300+ brain-computer interface companies (Fortune Business Insights, 2023)
Validation Strategy
- Signal quality: Compare SNR, spatial resolution, imaging depth between portable and benchtop on calibrated samples. Success: portable achieves >70% of benchtop SNR (Madruga et al., 2025).
- In vivo comparison: Image the same neurons in same mouse on consecutive days with both systems. Success: >85% agreement in detected cells and activity (Madruga et al., 2025).
- Chronic stability: Track neurons over 4-8 weeks with portable system. Success: >80% cell consistency, <20% fluorescence decline (Madruga et al., 2025).
- Behavioral validation: Verify portable microscope doesn't impair behavior. Success: locomotion, social interaction, sleep within 90% of controls.
- Cross-lab standardization: Distribute standardized test samples to multiple labs using open-source analysis software. Success: <10% variance in metrics across labs (Schottdorf et al., 2025).
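The SNR criterion in the first validation step could be automated along these lines. This is a sketch: the mean-over-standard-deviation SNR definition, the function names, and the toy frame data are my own illustrative choices, not taken from the cited protocols:

```javascript
// Simple SNR estimate for a frame of pixel intensities: mean / standard deviation.
function snr(pixels) {
  const mean = pixels.reduce((a, b) => a + b, 0) / pixels.length;
  const variance =
    pixels.reduce((a, b) => a + (b - mean) ** 2, 0) / pixels.length;
  return mean / Math.sqrt(variance);
}

// Acceptance check: portable must reach at least 70% of benchtop SNR.
function passesSnrCriterion(portableFrame, benchtopFrame) {
  return snr(portableFrame) >= 0.7 * snr(benchtopFrame);
}

// Toy frames: the portable example is noisier but still within the 70% bound.
const benchtop = [8, 12, 8, 12, 8, 12];
const portable = [7.5, 12.5, 7.5, 12.5, 7.5, 12.5];
console.log(passesSnrCriterion(portable, benchtop)); // true
```

In practice the same comparison would run on calibrated fluorescent samples imaged by both systems, as the validation plan describes.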
📎 Supplementary Materials
fc09a102-0c27-4c67-82f0-dd1bb593afcf - Natasha Chemello.png
a7609205-54fb-4186-9422-90384f8ff6d1 - Natasha Chemello.png
📚 References
- Choi, H., Yew, E. Y. S., Hallworth, B., & So, P. T. C. (2014). Improving femtosecond laser pulse delivery through a hollow core photonic crystal fiber for temporally focused wide-field two-photon endomicroscopy. Optics Express, 22(12), 15614-15625. https://doi.org/10.1364/OE.22.015614
- Del Mar Photonics. (2007). Ti:Sapphire laser kits pricing and specifications. https://dmphotonics.com/
- Denk, W., Strickler, J. H., & Webb, W. W. (1990). Two-photon laser scanning fluorescence microscopy. Science, 248(4951), 73-76. https://doi.org/10.1126/science.2321027
- eLife Sciences. (2024, October 20). Aberration correction in long GRIN lens-based endoscopes. eLife. https://elifesciences.org/
- Entenberg, D., Wyckoff, J., Gligorijevic, B., Roussos, E. T., Verkhusha, V. V., Pollard, J. W., & Condeelis, J. (2011). Setup and use of a two-laser multiphoton microscope for multichannel intravital fluorescence imaging. Nature Protocols, 6(10), 1500-1520. https://doi.org/10.1038/nprot.2011.376
- Excedr. (2025, September 30). How much does a multiphoton microscope cost? https://excedr.com/
- Fortune Business Insights. (2023). Multiphoton microscopy market size ($1.4 billion) 2030. https://strategicmarketresearch.com/
- Frontiers in Neuroscience. (2024, January 2). Two-photon excitation fluorescence in ophthalmology. Frontiers in Neuroscience. https://frontiersin.org/
- Galinanes, G. L., Marchand, P. J., Turcotte, R., Pellat, S., Ji, N., & Huber, D. (2018). Optical alignment device for two-photon microscopy. Biomedical Optics Express, 9(8), 3624-3639. https://doi.org/10.1364/BOE.9.003624
- Helmchen, F., & Denk, W. (2005). Deep tissue two-photon microscopy. Nature Methods, 2(12), 932-940. https://doi.org/10.1038/nmeth818
- Horton, N. G., Wang, K., Kobat, D., Clark, C. G., Wise, F. W., Schaffer, C. B., & Xu, C. (2013). In vivo three-photon microscopy of subcortical structures within an intact mouse brain. Nature Photonics, 7(3), 205-209.
- Imperial College London. (2017). Multiphoton fluorescence microscopy for clinical applications. https://imperial.ac.uk/
- Jun, S., Fattahi, H., & Wehner, D. (2024). New insights into the interaction of femtosecond lasers with living tissue. Communications Physics, 7, 135. https://doi.org/10.1038/s42005-024-01653-2
- Labmaker. (2024, December 31). 2-photon miniscope. https://labmaker.org/
- Laser Focus World. (2020, March 23). Femtosecond fiber laser at 920 nm improves two-photon microscopy. https://laserfocusworld.com/
- Luu, P., & Knutsen, P. M. (2024). More than double the fun with two-photon excitation microscopy. Nature, 628(8007), 263-264. https://doi.org/10.1038/d41586-024-00656-x
- Madruga, B. A., Reimer, J., Leifer, A., & Zador, A. (2025). Open-source, high performance miniature 2-photon microscope. bioRxiv. https://doi.org/10.1101/2024.03.30.586807
- Mostany, R., Miquelajauregui, A., Shtrahman, M., & Portera-Cailliau, C. (2015). Two-photon excitation microscopy and its applications in neuroscience. Methods in Molecular Biology, 1148, 27-54. https://doi.org/10.1007/978-1-4939-0470-9_2
- Nature. (2022, September 18). Micro 3D printing of a functional MEMS accelerometer. Nature. https://nature.com/
- PubMed Central. (2009, July 31). In vivo brain imaging using a portable 2.9 g two-photon microscope system. PLOS ONE, 4(7). https://pmc.ncbi.nlm.nih.gov/
- Schottdorf, M., Rich, P. D., Diamanti, E. M., Lin, A., Tafazoli, S., Nieh, E. H., & Thiberge, S. Y. (2025). TWINKLE: An open-source two-photon microscope for teaching and research. PLOS ONE, 20(2), e0297660. https://doi.org/10.1371/journal.pone.0297660
- So, P. T. C., Dong, C. Y., Masters, B. R., & Berland, K. M. (2000). Two-photon excitation fluorescence microscopy. Annual Review of Biomedical Engineering, 2, 399-429. https://doi.org/10.1146/annurev.bioeng.2.1.399
- Svoboda, K., & Yasuda, R. (2006). Principles of two-photon excitation microscopy and its applications to neuroscience. Neuron, 50(6), 823-839. https://doi.org/10.1016/j.neuron.2006.05.019
- Tai, S. P., Chan, M. C., Tsai, T. H., Guol, S. H., Chen, L. J., & Sun, C. K. (2004). Two-photon fluorescence microscope with a hollow-core photonic crystal fiber. Optics Express, 12(25), 6122-6128. https://doi.org/10.1364/OPEX.12.006122
- Thorlabs. (2024, July 18). Miniaturized two-photon microscope (Mini2P). https://thorlabs.com/
- Thorlabs. (2025, June 22). Silicon photomultiplier (SiPM) amplified detectors. https://thorlabs.com/
Other Course Work
📊 Assignment 1: EEG Analysis
📓 NatashaC_Copy_of_assignment1_eeg_filtering.ipynb
Jupyter Notebook. 💡 Opens in Google Colab for interactive execution. Requires Google account.
🎨 Assignment 2: BrainImation
📄 Code because the downloaded file wouldn't work - Natasha Chemello.pdf
// 🌿 Neurofeedback Meditation Garden — Smooth & Responsive Version 🌿
// Smooth noise motion + speed up at attention = 1.0, slow down at meditation = 1.0.
// For BrainImation EEG environment (uses eegData.meditation & eegData.attention).
let flowers = [];
let clouds = [];
let fireflies = [];

function setup() {
  // NOTE: the canvas is assumed to be created by the BrainImation environment,
  // which also provides the global eegData object used in draw().
  colorMode(HSB, 360, 100, 100);
  noiseSeed(random(1000));
  for (let i = 0; i < 25; i++) {
    flowers.push({
      x: random(width),
      y: random(height * 0.85, height * 0.95),
      baseY: 0,
      size: random(20, 40),
      bloom: 0.5,
      swayOffset: random(1000),
      hue: random(250, 285)
    });
  }
  for (let f of flowers) f.baseY = f.y;
  for (let i = 0; i < 14; i++) {
    clouds.push({
      x: random(width),
      y: random(height * 0.1, height * 0.4),
      size: random(120, 250),
      noiseOffset: random(1000),
      speedNoise: random(500)
    });
  }
  for (let i = 0; i < 25; i++) {
    fireflies.push({
      x: random(width),
      y: random(height * 0.7, height),
      flicker: random(TWO_PI),
      hue: random(50, 70),
      moveNoise: random(1000)
    });
  }
}

function draw() {
  let calm = eegData.meditation; // alpha
  let focus = eegData.attention; // beta
  // Map base speed from both brain states:
  // Calm slows time, focus accelerates it. Maximum boost at focus=1, calm=0.
  let base = map(focus, 0, 1, 1, 7); // up to 7× faster at full attention
  let slow = map(calm, 0, 1, 1, 0.25); // quarter speed at full calm
  let tSpeed = base * slow;
  drawSky(calm, focus);
  drawClouds(tSpeed);
  drawGround();
  drawFlowers(calm, focus, tSpeed);
  drawFireflies(calm, focus, tSpeed);
  drawHUD(calm, focus);
}

function drawSky(calm, focus) {
  let hue = lerp(200, 255, calm);
  let sat = lerp(70, 40, focus);
  let bright = lerp(90, 60, calm);
  background(hue, sat, bright);
}

function drawClouds(tSpeed) {
  noStroke();
  for (let c of clouds) {
    c.noiseOffset += 0.0015 * tSpeed; // smoother noise drift
    c.x = map(noise(c.noiseOffset), 0, 1, -c.size, width + c.size);
    c.y += sin(frameCount * 0.002 * tSpeed) * 0.2; // minor float motion
    fill(200, 20, 95, 0.6);
    ellipse(c.x, c.y, c.size * 0.8, c.size * 0.6);
    ellipse(c.x + c.size * 0.25, c.y + 10, c.size * 0.6, c.size * 0.4);
    ellipse(c.x - c.size * 0.25, c.y + 10, c.size * 0.6, c.size * 0.4);
  }
}

function drawGround() {
  noStroke();
  fill(115, 35, 75);
  rect(0, height * 0.8, width, height * 0.25);
}

function drawFlowers(calm, focus, tSpeed) {
  for (let f of flowers) {
    // Gradual bloom interpolation
    f.bloom = lerp(f.bloom, calm, 0.08);
    // Perlin-smooth sway influenced by speed
    f.swayOffset += 0.002 * tSpeed;
    let sway = map(noise(f.swayOffset), 0, 1, -10 * f.bloom, 10 * f.bloom);
    stroke(120, 60, 45);
    strokeWeight(2 + 4 * f.bloom);
    line(f.x + sway, f.baseY, f.x, f.baseY - f.bloom * 65);
    noStroke();
    push();
    translate(f.x + sway, f.baseY - f.bloom * 65);
    for (let i = 0; i < 6; i++) {
      let hueShift = map(focus, 0, 1, -10, 10);
      fill(f.hue + hueShift, 55, 95, 0.8);
      push();
      rotate((i * TWO_PI) / 6 + noise(frameCount * 0.0015 * tSpeed));
      ellipse(0, -f.size * 0.8, f.size, f.size * 1.6);
      pop();
    }
    fill(55, 90, 95, 0.9);
    ellipse(0, 0, f.size * 0.5 + f.bloom * 10);
    pop();
  }
}

function drawFireflies(calm, focus, tSpeed) {
  let count = map(calm, 0, 1, 5, 25);
  for (let i = 0; i < count; i++) {
    let ff = fireflies[i];
    ff.flicker += 0.05 * tSpeed;
    ff.moveNoise += 0.001 * tSpeed;
    let flicker = (sin(ff.flicker) + 1) / 2;
    let wander = map(noise(ff.moveNoise), 0, 1, -5, 5);
    fill(ff.hue, 80, 100, flicker);
    noStroke();
    ellipse(ff.x + wander, ff.y + sin(frameCount * 0.01 * tSpeed) * 2, 4 + flicker * 3);
  }
}

function drawHUD(calm, focus) {
  noStroke();
  fill(0, 0, 20, 0.7);
  rect(0, 0, width, 70);
  fill(0, 0, 100);
  textSize(16);
  textAlign(LEFT, TOP);
  text("Meditation (Alpha): " + calm.toFixed(2), 20, 10);
  text("Attention (Beta): " + focus.toFixed(2), 20, 30);
  text("At full focus, the world races; in calm, time slows.", 20, 50);
}
🎥 IMG_6179 - Natasha Chemello.mov
💡 Videos require Google Drive access. Open in new tab if it doesn't load.
🎯 Midterm Project
📄 PSYCH 403 midterm 1 - Natasha Chemello.pdf
Part 1:
I chose to implement a BCI that accomplishes brain state training with a little game component to it, building upon a simple meditation landscape. Instead of having the flowers simply reflect your brain state, I made the flowers react to your brain state, and you then get a score based on your brain state (whether it is dominant in alpha or beta waves). I tried to include an audio component, but it would not work with BrainImation in the way I wanted it to.
There are flowers, clouds, fireflies, and a color scheme; each is manipulated in some way to provide feedback on how the player is doing. The landscape becomes more harmonious as a meditation state is detected: the flowers bloom and deepen in color, the clouds slow or accelerate gracefully, the fireflies flicker slowly, and the sky hues shift from daylight blue to a deep purple. If attention is detected to be higher, then everything is more chaotic and glitchy, and the score is reduced. Progress toward the stated target ratio gives a small bonus in points. The score never goes negative, and it directly reflects the user's immediate mental state. The target ratio for meditation training is 0.8-1.0 (alpha/theta), which was chosen based on previous researchers' work on neurofeedback training (Nooripour et al., 2024). The EEG data collected is immediately used to calculate an alpha/theta ratio, which is then used to track progress in the game mode. When the ratio reaches the desired range (0.8-1.0), the landscape provides visual feedback of stable, calming colors and less chaos.
This goes beyond basic parameters because instead of just being an endless
loop of the same thing, it shows immediate feedback from the user’s brain state.
It reacts to the user rather than the user reacting to the animation. It also keeps
a game score of the user’s brain state, which is a novel concept.
By implementing this animation, I learned that there is such a thing as a desired
alpha/theta ratio. I’ve continued to learn throughout this course about all the
different BCIs that can be implemented, and it’s really interesting to see the
various ways that brain state can be manipulated or reacted to. I’ve learned
that you can combine different concepts, like a game and neurofeedback, to
create something unique like my animation.
Part 2:
I chose to measure the N400 in my ERP experiment. The N400 has been shown to be connected to understanding the meaning of words, which is why I used words as my stimulus. The N400 peaks around 400 ms after the stimulus is presented. The stimuli include the words DOG, CAT, BREAD, BUTTER, CHAIR, TABLE, FORK, CAR, and RAIN. They are grouped into related (congruent) categories, like BREAD and BUTTER, and quickly presented in these categories on the screen. After the congruent pairing is shown, it is followed by an incongruent pairing, which should produce a larger N400 because the component is particularly reactive to incongruency. The first pairing acts as a "primer" while the second (incongruent) pairing acts as the "target"; the target is expected to elicit a larger N400. The experiment also colors each word based on whether it is congruent or incongruent with the previous word shown: green means congruent, and orange means incongruent. The colors are intentionally chosen to elicit different reactions, because green tends to be associated with being correct, while orange is often seen as wrong, especially in comparison to green. Thus, the color component is intended to help create a stronger N400.
EEG data are collected for each trial ranging from −200 ms to +800 ms around
the time the pairing appears. Each EEG segment is baseline-corrected by
subtracting the average signal from the pre-stimulus window (−200–0 ms), then
categorized as congruent or incongruent. The number of averaged trials for
each condition is displayed at the top. ERPs are generated by averaging all
epochs for each category and plotted using distinct colors on the same graph as
shown in the upper display.
If tested with a real EEG, I would expect the N400 to show a greater negative deflection in the 400 ms post-stimulus region when incongruent pairings are shown on the screen.
References
Nooripour, R., Viki, M. G., Ghanbari, N., Farmani, F., & Emadi, F. (2024). Alpha/theta neurofeedback rehabilitation for improving attention and working memory in female students with learning disabilities. OBM Neurobiology, 8(3), 229. https://doi.org/10.21926/obm.neurobiol.2403229
🎥 Screen Recording 2025-10-24 at 4.27.21 PM - Natasha Chemello.mov
🎥 Screen Recording 2025-10-24 at 12.10.26 PM - Natasha Chemello.mov
📝 Chemello_midterm_part2 - Natasha Chemello.txt
💡 Code is embedded in this portfolio - opens instantly in the live BrainImation editor (no internet required!)
📝 Chemello_midterm_part1 - Natasha Chemello.txt